Distributed Multiagent Optimization: Linear Convergence Rate of ADMM

Abstract

We propose a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM) to minimize the sum of locally known convex functions. This optimization problem captures many applications in distributed machine learning and statistical estimation. We provide a novel analysis showing that if the functions are strongly convex and have Lipschitz gradients, then an ε-optimal solution can be computed in O(κf log(1/ε)) iterations, where κf is the condition number of the problem. This is the first paper in the literature to establish this rate of convergence for distributed ADMM, matching the best known iteration complexity for centralized ADMM. Our analysis also highlights the effect of network structure on the convergence rate.
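To make the setting concrete, the following is a minimal sketch of consensus ADMM in the global-consensus form (the classical variant, not necessarily the paper's decentralized algorithm). Each agent i holds a hypothetical local objective f_i(x) = ½(x − a_i)², which is strongly convex with Lipschitz gradient, so the sum is minimized at the mean of the a_i; the data, penalty parameter, and iteration count below are illustrative assumptions.

```python
import numpy as np

rho = 1.0                        # ADMM penalty parameter (assumed)
a = np.array([1.0, 2.0, 6.0])    # hypothetical local data, one value per agent
n = len(a)
x = np.zeros(n)                  # local primal variables, one per agent
u = np.zeros(n)                  # scaled dual variables
z = 0.0                          # shared consensus variable

for _ in range(200):
    # x-update: closed form of argmin_x f_i(x) + (rho/2)(x - z + u_i)^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # z-update: averaging the agents' views enforces consensus
    z = np.mean(x + u)
    # dual update: accumulates each agent's consensus violation
    u = u + x - z

print(z)  # approaches mean(a) = 3.0 at a linear (geometric) rate
```

For this quadratic instance the consensus error contracts by a constant factor per iteration, which is the linear-rate behavior the abstract establishes in general for strongly convex objectives with Lipschitz gradients.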


Similar resources

Online Distributed ADMM on Networks

This paper presents a convergence analysis on distributed Alternating Direction Method of Multipliers (ADMM) for online convex optimization problems under linear constraints. The goal is to distributively optimize a global objective function over a network of decision makers. The global objective function is composed of convex cost functions associated with each agent. The local cost functions,...


Communication-Efficient Distributed Optimization using an Approximate Newton-type Method

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evide...


Distributed Convex Optimization with Many Convex Constraints

We address the problem of solving convex optimization problems with many convex constraints in a distributed setting. Our approach is based on an extension of the alternating direction method of multipliers (ADMM), which has recently gained a lot of attention in the Big Data context. Although it was invented decades ago, ADMM so far can be applied only to unconstrained problems and problems with...


Nonconvex generalizations of ADMM for nonlinear equality constrained problems

The growing demand for efficient and distributed optimization algorithms for large-scale data has stimulated the popularity of the Alternating Direction Method of Multipliers (ADMM) in numerous areas, such as compressive sensing, matrix completion, and sparse feature learning. While linear equality constrained problems have been extensively explored to be solved by ADMM, there lacks a generic framework ...


Scalable Stochastic Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADMM) has been widely used in many applications due to its promising performance in solving complex regularization problems and large-scale distributed optimization problems. Stochastic ADMM, which visits only one sample or a mini-batch of samples at a time, has recently been proved to achieve better performance than batch ADMM. However, most stochasti...




Publication date: 2015